Conversation
@@ -451,6 +451,8 @@
| `--model MODEL_NAME` | Model name to launch with. If omitted, you will be prompted to select one. | No |
server log
2026-04-01 15:16:07.111 [Info] (Router) Model loaded successfully. Total loaded: 1
2026-04-01 15:16:07.111 [Info] (Server) Model loaded successfully: user.Qwen3.5-35B-A3B-NoThinking
2026-04-01 15:16:07.111 [Info] (Server) POST /api/v1/responses - Streaming
2026-04-01 15:16:07.119 [Error] (Process) srv operator(): got exception: {"error":{"code":500,"message":"\n------------\nWhile executing CallExpression at line 85, column 32 in source:\n...first %}↵ {{- raise_exception('System message must be at the beginnin...\n ^\nError: Jinja Exception: System message must be at the beginning.","type":"server_error"}}
2026-04-01 15:16:07.119 [Info] (Process) srv log_server_r: done request: POST /v1/responses 127.0.0.1 500
2026-04-01 15:16:07.119 [Error] (StreamingProxy) Backend returned error: 500
(the same request/Jinja-exception cycle repeats five more times through 15:16:13)
Codex isn't working for me, any tips?
Appears to be related to: ggml-org/llama.cpp#20733. There's a draft PR open to address this: ggml-org/llama.cpp#21174.
Looks like an upstream issue with the Qwen3.5 model family. I've been testing with GLM 4.7 Flash and Nemotron 3 Nano, which probably explains why I haven't hit this issue.
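For anyone hitting this in the meantime, a minimal workaround sketch is to launch Codex against one of the model families reported working in this thread. The model identifier below is illustrative, not a verified name; use whatever appears in your local model list:

```
# Workaround sketch: launch Codex against a model reported working above.
# The model identifier is an illustrative assumption; check your local
# model list for the exact name.
lemonade launch codex --model GLM-4.7-Flash
```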
Thanks! I'll try again with GLM. Should Qwen3.5 not be a recommended recipe for Codex then?
Works for me with GLM! So yeah the recommended recipe list just needs to be adjusted.
> Works for me with GLM! So yeah the recommended recipe list just needs to be adjusted.
Interestingly, Qwen 3 Coder Next works fine for me; the issue seems limited to the Qwen 3.5 family. I'll remove those models from the recommended list.
For those who have already downloaded Qwen 3.5 models, do you think we should add a warning for them as well?
```
lemonade launch AGENT [--model MODEL_NAME] [options]
```

| Option/Argument | Description | Required |
"This enables features such as session resumption in both Claude Code and Codex."
Can you provide instructions for how to do this? If I use the regular resume command in Codex it resumes, but with ChatGPT as the model.
To resume from a previous session, the command would look something like this: `lemonade launch codex --agent-args "resume SESSION_ID"`. This automatically picks up the Lemonade provider as the default and should not route you to the OpenAI provider.
I will add an entry to the docs for this.
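For reference, a concrete invocation would look like this (`SESSION_ID` is a placeholder; substitute the id from your earlier run, which the agent itself surfaces):

```
# Resume an earlier Codex session through the Lemonade provider.
# SESSION_ID is a placeholder for the id from your previous run.
lemonade launch codex --agent-args "resume SESSION_ID"
```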

Resolves #1504

Also introduces `--agent-args`, allowing users to pass in arguments to their agents, similar to how we used to do `--llamacpp-args`. This enables features such as session resumption in both Claude Code and Codex.
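A sketch of how the flag composes with other agents (the `claude` agent name and Claude Code's `--continue` flag are assumptions for illustration, not taken from this PR):

```
# Forward agent-specific CLI flags through lemonade. The "claude" agent
# name and the --continue flag are illustrative assumptions.
lemonade launch claude --agent-args "--continue"
```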